SwePub
Results list for the search "LAR1:lu;mspu:(conferencepaper);pers:(Spaanenburg Lambert);spr:eng;pers:(vanderZwaag B J)"


  • Result 1-4 of 4
1.
  • Malki, Suleyman, et al. (author)
  • It takes a winner to take his share
  • 2003
  • In: Proceedings ProRisc'03. - 9073461391 ; pp. 517-522
  • Conference paper (peer-reviewed). Abstract:
    • Novelty detection is based on the creation of a space with a similarity metric. The design of a neural detector is shown to be a compromise between promptness, universality, robustness and sensitivity. The feed-forward topology is chosen from three alternatives for its ability to realize that compromise by applying structural redundancy. A generic FPGA implementation supports the use in adaptive intelligent systems.
2.
  • Spaanenburg, Lambert, et al. (author)
  • Natural learning of neural networks by reconfiguration
  • 2003
  • In: SPIE Proceedings on Bioengineered and Bioinspired Systems. - SPIE. - ISSN 1996-756X, 0277-786X ; 5119, pp. 273-284
  • Conference paper (peer-reviewed). Abstract:
    • The communicational and computational demands of neural networks are hard to satisfy in a digital technology. Temporal computing solves this problem by iteration, but leaves a slow network. Spatial computing was no option until the coming of modern FPGA devices. The letter shows how a small feed-forward neural module can be configured on the limited logic blocks between RAM and multiplier macros. It is then described how, by spatial unrolling or by reconfiguration, a large modular ANN can be built from such modules.
3.
  •  
4.
  • vanderZwaag, B J, et al. (author)
  • Translating feed-forward nets to SOM-like maps
  • 2003
  • In: Proceedings ProRisc'03. - 9073461391 ; pp. 447-452
  • Conference paper (peer-reviewed). Abstract:
    • A major disadvantage of feedforward neural networks is still the difficulty to gain insight into their internal functionality. This is much less the case for, e.g., nets that are trained unsupervised, such as Kohonen's self-organizing feature maps (SOMs). These offer a direct view into the stored knowledge, as their internal knowledge is stored in the same format as the input data that was used for training or is used for evaluation. This paper discusses a mathematical transformation of a feed-forward network into a SOM-like structure such that its internal knowledge can be visually interpreted. This is particularly applicable to networks trained in the general classification problem domain.
Type of content: peer-reviewed (4)
Author/Editor: Malki, Suleyman (2); Alberts, R (1); Slump, C H (1); Slump, C (1)
University: Lund University (4)
Research subject (UKÄ/SCB): Engineering and Technology (4)
